Personalizing Student-Agent Interactions Using Log-Contextualized Retrieval Augmented Generation (RAG)

Cohn, Clayton, Rayala, Surya, Snyder, Caitlin, Fonteles, Joyce, Jain, Shruti, Mohammed, Naveeduddin, Timalsina, Umesh, Burriss, Sarah K., S, Ashwin T, Srivastava, Namrata, Deweese, Menton, Eeds, Angela, Biswas, Gautam

arXiv.org Artificial Intelligence

Collaborative dialogue offers rich insights into students' learning and critical thinking, which are essential for personalizing pedagogical agent interactions in STEM+C settings. While large language models (LLMs) facilitate dynamic pedagogical interactions, hallucinations undermine confidence, trust, and instructional value. Retrieval-augmented generation (RAG) grounds LLM outputs in curated knowledge but requires a clear semantic link between user input and a knowledge base, which is often weak in student dialogue. We propose log-contextualized RAG (LC-RAG), which enhances RAG retrieval by using environment logs to contextualize collaborative discourse. Our findings show that LC-RAG improves retrieval over a discourse-only baseline and allows our collaborative peer agent, Copa, to deliver relevant, personalized guidance that supports students' critical thinking and epistemic decision-making in a collaborative computational modeling environment, C2STEM.
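The abstract's core idea, augmenting the retrieval query with environment logs when the discourse alone has a weak semantic link to the knowledge base, can be sketched with a toy bag-of-words retriever. All snippets, logs, and knowledge-base entries below are invented for illustration; the actual LC-RAG system presumably uses learned embeddings rather than word-overlap cosine similarity:

```python
# Toy illustration of log-contextualized retrieval: the discourse-only
# query matches the wrong document, while appending environment-log
# context steers retrieval to the relevant one.
# All data here is invented for illustration, not from the paper.
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

knowledge_base = [
    "check that the kinetic energy update uses the velocity variable",
    "remind students to define variables before using them in a block",
]

discourse = "wait why is it not working do we define it here"   # vague dialogue
log_context = "student edited velocity variable in kinetic energy block"

query_plain = bow(discourse)                      # discourse-only baseline
query_lc = bow(discourse + " " + log_context)     # log-contextualized query

best_plain = max(knowledge_base, key=lambda d: cosine(query_plain, bow(d)))
best_lc = max(knowledge_base, key=lambda d: cosine(query_lc, bow(d)))
```

Here the vague word "define" pulls the discourse-only query toward the second entry, while the log terms ("velocity", "kinetic energy") anchor the contextualized query to the first, more relevant one.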


VTutor: An Open-Source SDK for Generative AI-Powered Animated Pedagogical Agents with Multi-Media Output

Chen, Eason, Lin, Chenyu, Tang, Xinyi, Xi, Aprille, Wang, Canwen, Lin, Jionghao, Koedinger, Kenneth R

arXiv.org Artificial Intelligence

The rapid evolution of large language models (LLMs) has transformed human-computer interaction (HCI), but interaction with LLMs remains largely text-based, while multi-modal approaches remain under-explored. This paper introduces VTutor, an open-source Software Development Kit (SDK) that combines generative AI with advanced animation technologies to create engaging, adaptable, and realistic animated pedagogical agents (APAs) for human-AI multi-media interactions. VTutor leverages LLMs for real-time personalized feedback, advanced lip synchronization for natural speech alignment, and WebGL rendering for seamless web integration. Supporting various 2D and 3D character models, VTutor enables researchers and developers to design emotionally resonant, contextually adaptive learning agents. This toolkit enhances learner engagement, feedback receptivity, and human-AI interaction while promoting trustworthy AI principles in education. VTutor sets a new standard for next-generation APAs, offering an accessible, scalable solution for fostering meaningful and immersive human-AI interaction experiences. The VTutor project is open-sourced and welcomes community-driven contributions and showcases.


Education in the Era of Neurosymbolic AI

Jaldi, Chris Davis, Ilkou, Eleni, Schroeder, Noah, Shimizu, Cogan

arXiv.org Artificial Intelligence

Education is poised for a transformative shift with the advent of neurosymbolic artificial intelligence (NAI), which will redefine how we support deeply adaptive and personalized learning experiences. NAI-powered education systems will be capable of interpreting complex human concepts and contexts while employing advanced problem-solving strategies, all grounded in established pedagogical frameworks. This will enable a level of personalization in learning systems that to date has been largely unattainable at scale, providing finely tailored curricula that adapt to an individual's learning pace and accessibility needs, including the diagnosis of student understanding of subjects at a fine-grained level, identifying gaps in foundational knowledge, and adjusting instruction accordingly. In this paper, we propose a system that leverages the unique affordances of pedagogical agents -- embodied characters designed to enhance learning -- as critical components of a hybrid NAI architecture. These agents can simulate nuanced discussions, debates, and problem-solving exercises that push learners beyond rote memorization toward deep comprehension. We discuss the rationale for our system design and the preliminary findings of our work. We conclude that education in the era of NAI will make learning more accessible, equitable, and aligned with real-world skills. This is an era that will explore a new depth of understanding in educational tools.


Learning with Digital Agents: An Analysis based on the Activity Theory

Dolata, Mateusz, Katsiuba, Dzmitry, Wellnhammer, Natalie, Schwabe, Gerhard

arXiv.org Artificial Intelligence

Digital agents are considered a general-purpose technology. They spread quickly in private and organizational contexts, including education. Yet, research lacks a conceptual framing to describe interaction with such agents in a holistic manner. Focusing on the interaction with a pedagogical agent, i.e., a digital agent capable of natural-language interaction with a learner, we propose a model of learning activity based on activity theory. We use this model and a review of prior research on digital agents in education to analyze how various characteristics of the activity, including features of a pedagogical agent or learner, influence learning outcomes. The analysis leads to the identification of information systems (IS) research directions and guidance for developers of pedagogical agents and digital agents in general. We conclude by extending the activity theory-based model beyond the context of education and show how it helps designers and researchers ask the right questions when creating a digital agent.


The Effects of Embodiment and Personality Expression on Learning in LLM-based Educational Agents

Sonlu, Sinan, Bendiksen, Bennie, Durupinar, Funda, Güdükbay, Uğur

arXiv.org Artificial Intelligence

This work investigates how personality expression and embodiment affect personality perception and learning in educational conversational agents. We extend an existing personality-driven conversational agent framework by integrating LLM-based conversation support tailored to an educational application. We describe a user study built on this system to evaluate two distinct personality styles: high extroversion and agreeableness versus low extroversion and agreeableness. For each personality style, we assess three models: (1) a dialogue-only model that conveys personality through dialogue, (2) an animated human model that expresses personality solely through dialogue, and (3) an animated human model that expresses personality through both dialogue and body and facial animations. The results indicate that all models are positively perceived regarding both personality and learning outcomes. Models with high personality traits are perceived as more engaging than those with low personality traits. We provide a comprehensive quantitative and qualitative analysis of perceived personality traits, learning parameters, and user experiences based on participant ratings of the model types and personality styles, as well as users' responses to open-ended questions.


Comparing Photorealistic and Animated Embodied Conversational Agents in Serious Games: An Empirical Study on User Experience

Korre, Danai

arXiv.org Artificial Intelligence

Embodied conversational agents (ECAs) are paradigms of conversational user interfaces in the form of embodied characters. While ECAs offer various manipulable features, this paper focuses on a study conducted to explore two distinct levels of presentation realism. The two agent versions are photorealistic and animated. The study aims to provide insights and design suggestions for speech-enabled ECAs within serious game environments. A within-subjects, two-by-two factorial design was employed for this research with a cohort of 36 participants balanced for gender. The results showed that both the photorealistic and the animated versions were perceived as highly usable, with overall mean scores of 5.76 and 5.71, respectively. However, 69.4 per cent of the participants stated they preferred the photorealistic version, 25 per cent stated they preferred the animated version and 5.6 per cent had no stated preference. The photorealistic agents were perceived as more realistic and human-like, while the animated characters made the task feel more like a game. Even though the agents' realism had no significant effect on usability, it positively influenced participants' perceptions of the agent. This research aims to lay the groundwork for future studies on ECA realism's impact in serious games across diverse contexts.


Generative AI for learning: Investigating the potential of synthetic learning videos

Leiker, Daniel, Gyllen, Ashley Ricker, Eldesouky, Ismail, Cukurova, Mutlu

arXiv.org Artificial Intelligence

Recent advances in generative artificial intelligence (AI) have captured worldwide attention. Tools such as DALL-E 2 and ChatGPT suggest that tasks previously thought to be beyond the capabilities of AI may now augment the productivity of creative media in various new ways, including through the generation of synthetic video. This research paper explores the utility of using AI-generated synthetic video to create viable educational content for online educational settings. To date, there is limited research investigating the real-world educational value of AI-generated synthetic media. To address this gap, we examined the impact of using AI-generated synthetic video in an online learning platform on both learners' content acquisition and learning experience. We took a mixed-method approach, randomly assigning adult learners (n=83) into one of two micro-learning conditions, collecting pre- and post-learning assessments, and surveying participants on their learning experience. The control condition included a traditionally produced instructor video, while the experimental condition included a synthetic video with a realistic AI-generated character. The results show that learners in both conditions demonstrated significant improvement from pre- to post-learning (p<.001), with no significant differences in gains between the two conditions (p=.80). In addition, no differences were observed in how learners perceived the traditional and synthetic videos. These findings suggest that AI-generated synthetic learning videos have the potential to be a viable substitute for videos produced via traditional methods in online educational settings, making high-quality educational content more accessible across the globe.


Pedagogical Agents: Back to the Future

Johnson, W. Lewis (Alelo Inc.) | Lester, James C. (North Carolina State University)

AI Magazine

Back in the 1990s we started work on pedagogical agents, a new user interface paradigm for interactive learning environments. Pedagogical agents are autonomous characters that inhabit learning environments and can engage with learners in rich, face-to-face interactions. Building on this work, in 2000 we, together with our colleague, Jeff Rickel, published an article on pedagogical agents that surveyed this new paradigm and discussed its potential. We made the case that pedagogical agents that interact with learners in natural, life-like ways can help learning environments achieve improved learning outcomes. This article has been widely cited, and was a winner of the 2017 IFAAMAS Award for Influential Papers in Autonomous Agents and Multiagent Systems (IFAAMAS, 2017). On the occasion of receiving the IFAAMAS award, and after twenty years of work on pedagogical agents, we decided to take another look at the future of the field. We'll start by revisiting our predictions for pedagogical agents back in 2000, and examine which of those predictions panned out. Then, informed by what we have learned since then, we will turn to emerging trends and the future of pedagogical agents. Advances in natural language dialogue, affective computing, machine learning, virtual environments, and robotics are making possible even more lifelike and effective pedagogical agents, with potentially profound effects on the way people learn.


Pedagogical Agent Research at CARTE

AI Magazine

This article gives an overview of current research on animated pedagogical agents at the Center for Advanced Research in Technology for Education (CARTE) at the University of Southern California/Information Sciences Institute. Animated pedagogical agents, nicknamed guidebots, interact with learners to help keep learning activities on track. They combine the pedagogical expertise of intelligent tutoring systems with the interpersonal interaction capabilities of embodied conversational characters. They can support the acquisition of team skills as well as skills performed alone by individuals. At CARTE, we have been developing guidebots that help learners acquire a variety of problem-solving skills in virtual worlds, in multimedia environments, and on the web.